Diffusion models, which learn to reverse a signal destruction process to generate new data, typically require the signal at each step to have the same dimension. We argue that, considering the spatial redundancy in image signals, there is no need to maintain a high dimensionality in the evolution process, especially in the early generation phase. To this end, we make a theoretical generalization of the forward diffusion process via signal decomposition. Concretely, we manage to decompose an image into multiple orthogonal components and control the attenuation of each component when perturbing the image. That way, as the noise strength increases, we are able to diminish those inconsequential components and thus use a lower-dimensional signal to represent the source, barely losing information. Such a reformulation allows the dimension to vary in both the training and inference of diffusion models. Extensive experiments on a range of datasets suggest that our approach substantially reduces the computational cost and achieves on-par or even better synthesis performance compared to baseline methods. We also show that our strategy facilitates high-resolution image synthesis and improves the FID of a diffusion model trained on FFHQ at $1024\times1024$ resolution from 52.40 to 10.46. Code and models will be made publicly available.
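The sketch below illustrates the general idea of attenuating orthogonal components as the noise grows, using a 2D DCT as an assumed orthogonal basis and a hand-picked linear schedule; it is not the paper's formulation, only a toy forward step showing how the retained signal shrinks to a lower-dimensional representation.

```python
# Toy frequency-aware forward diffusion step (assumption: DCT basis, linear schedule).
import numpy as np
from scipy.fft import dctn, idctn

def forward_step(x0, t, T, keep_ratio_end=0.25, sigma_max=1.0):
    """Perturb image x0 at step t while attenuating high-frequency components.

    As t -> T, only the lowest `keep_ratio_end` fraction of frequencies is
    retained, so the signal could be stored or processed at lower resolution.
    """
    h, w = x0.shape
    progress = t / T
    # Orthogonal decomposition: 2D DCT coefficients of the image.
    coefs = dctn(x0, norm="ortho")
    # Diminish inconsequential components: keep a shrinking low-frequency block.
    keep = int(round(max(h, w) * (1.0 - (1.0 - keep_ratio_end) * progress)))
    mask = np.zeros_like(coefs)
    mask[:keep, :keep] = 1.0
    attenuated = idctn(coefs * mask, norm="ortho")
    # Standard Gaussian perturbation on top of the attenuated signal.
    sigma = sigma_max * progress
    return attenuated + sigma * np.random.randn(h, w)

x0 = np.random.rand(64, 64)           # stand-in for a grayscale image
xt = forward_step(x0, t=800, T=1000)  # heavily attenuated and noised sample
```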
This paper proposes a novel Unified Feature Optimization (UFO) paradigm for training and deploying deep models under real-world and large-scale scenarios, which requires a collection of multiple AI functions. UFO aims to benefit every single task through large-scale pretraining on all tasks. Compared with the well-known foundation models, UFO has two different points of emphasis, i.e., a relatively smaller model size and no adaptation cost: 1) UFO squeezes a wide range of tasks into a moderate-sized unified model in a multi-task learning manner and further trims the model size when transferring to downstream tasks. 2) UFO does not emphasize transfer to novel tasks. Instead, it aims to make the trimmed model dedicated to one or more already-seen tasks. With these two characteristics, UFO provides great convenience for flexible deployment while maintaining the benefits of large-scale pretraining. A key merit of UFO is that the trimming process not only reduces the model size and inference cost but also improves the accuracy on certain tasks. Specifically, UFO considers multi-task training, which has a two-fold impact on the unified model: some closely related tasks benefit each other, while some tasks conflict with one another. UFO manages to reduce the conflicts and preserve the mutual benefits through a novel network architecture search (NAS) method. Experiments on a wide range of deep representation learning tasks (i.e., face recognition, person re-identification, vehicle re-identification, and product retrieval) show that the model trimmed from UFO achieves higher accuracy than its single-task-trained counterpart while having a smaller model size, validating the concept of UFO. Besides, UFO also supports the release of a 17-billion-parameter computer vision (CV) foundation model, the largest CV model in the industry.
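As a rough illustration of the train-jointly-then-trim idea (not the paper's NAS-based supernet), the toy PyTorch sketch below shares one backbone across tasks and drops unneeded heads at deployment; all names and dimensions are made up.

```python
# Toy multi-task model with per-deployment trimming (illustrative only).
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, feat_dim=256, task_dims=None):
        super().__init__()
        task_dims = task_dims or {"face": 512, "reid": 256}
        # Shared backbone trained on all tasks in a multi-task manner.
        self.backbone = nn.Sequential(
            nn.Linear(1024, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # One lightweight head per already-seen task.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, dim) for name, dim in task_dims.items()}
        )

    def forward(self, x, task):
        return self.heads[task](self.backbone(x))

    def trim(self, keep_tasks):
        """Keep only the heads needed for deployment; the actual method also
        prunes redundant backbone paths via NAS, which is not modeled here."""
        self.heads = nn.ModuleDict({t: self.heads[t] for t in keep_tasks})
        return self

model = MultiTaskModel()
deployed = model.trim(["reid"])                     # specialize for one seen task
emb = deployed(torch.randn(4, 1024), task="reid")   # (4, 256) embeddings
```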
The discriminator plays a vital role in training Generative Adversarial Networks (GANs) by distinguishing real from synthesized samples. While the real data distribution remains unchanged, the synthesis distribution keeps varying because of the evolving generator, which in turn changes the bi-classification task assigned to the discriminator. We argue that a discriminator with an on-the-fly adjustment of its capacity can better accommodate such a time-varying task. A comprehensive empirical study confirms that the proposed training strategy, termed DynamicD, improves synthesis performance without incurring any additional computational cost or training objectives. Two capacity-adjusting schemes are developed for training GANs under different data regimes: i) given a sufficient amount of training data, the discriminator benefits from a progressively increased learning capacity, and ii) when the training data is limited, gradually decreasing the layer width mitigates the over-fitting issue of the discriminator. Experiments on both 2D and 3D-aware image synthesis tasks conducted on a range of datasets substantiate the generalizability of our DynamicD as well as its substantial improvement over the baselines. Furthermore, DynamicD is synergistic with other discriminator-improving approaches (including data augmentation, regularizers, and pre-training) and brings continuous performance gains when combined with them for learning GANs.
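The two capacity schedules can be pictured as a simple width multiplier over training; the numbers below are illustrative, and the actual on-the-fly expansion or shrinking of discriminator weights in DynamicD is not shown.

```python
# Toy width schedules for the two data regimes (illustrative numbers).
def width_multiplier(step, total_steps, data_regime="sufficient",
                     start=0.5, end=1.0):
    """Linear width schedule for the discriminator's layers.

    sufficient data -> grow capacity from `start` to `end`;
    limited data    -> shrink capacity from `end` to `start`.
    """
    progress = min(step / total_steps, 1.0)
    if data_regime == "sufficient":
        return start + (end - start) * progress  # progressively increase
    return end - (end - start) * progress        # progressively decrease

base_channels = 64
for step in (0, 5000, 10000):
    w = width_multiplier(step, total_steps=10000, data_regime="limited")
    print(step, int(base_channels * w))  # channels the discriminator would use
```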
The rank of a neural network measures the information flowing across layers. It is an instance of a key structural condition that applies across broad domains of machine learning. In particular, the assumption of low-rank feature representations leads to algorithmic developments in many architectures. For neural networks, however, the intrinsic mechanism that yields low-rank structures remains vague and unclear. To fill this gap, we perform a rigorous study on the behavior of network rank, focusing particularly on the notion of rank deficiency. We theoretically establish a universal monotonic decreasing property of network rank from the basic rules of differential and algebraic composition, and we uncover rank deficiency of network blocks and deep function coupling. By virtue of our numerical tools, we provide the first empirical analysis of the per-layer behavior of network rank in practical settings, i.e., ResNets, deep MLPs, and Transformers on ImageNet. These empirical results are in direct accord with our theory. Furthermore, we reveal a novel phenomenon of independence deficit caused by the rank deficiency of deep networks, where the classification confidence of a given category can be linearly determined by the confidences of a handful of other categories. The theoretical results of this work, together with the empirical findings, may advance our understanding of the inherent principles of deep neural networks.
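One way to probe this empirically is to estimate the numerical rank of per-layer feature matrices over a batch of inputs; the sketch below uses a simple singular-value threshold, which is my own choice rather than the paper's estimator.

```python
# Per-layer numerical rank of feature matrices (assumed thresholding rule).
import torch
import torch.nn as nn

def numerical_rank(features, rtol=1e-3):
    """Rank of an (N, D) feature matrix: number of singular values above
    rtol times the largest singular value."""
    s = torch.linalg.svdvals(features)  # sorted in descending order
    return int((s > rtol * s[0]).sum())

mlp = nn.Sequential(nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 128))
x = torch.randn(512, 128)
for i, layer in enumerate(mlp):
    x = layer(x)
    print(f"layer {i}: numerical rank = {numerical_rank(x)}")
```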
Despite the rapid advancement of semantic discovery in the latent space of Generative Adversarial Networks (GANs), existing approaches are either limited to finding global attributes or rely on a number of segmentation masks to identify local attributes. In this work, we present a highly efficient algorithm to factorize the latent semantics learned by GANs with respect to an arbitrary image region. Concretely, we revisit the task of local manipulation with pre-trained GANs and formulate region-based semantic discovery as a dual optimization problem. Through an appropriately defined generalized Rayleigh quotient, we manage to solve this problem without any annotations or training. Experimental results on various state-of-the-art GAN models demonstrate the effectiveness of our approach, as well as its superiority over prior art in terms of precise control, region robustness, speed of implementation, and simplicity of use.
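Directions that maximize a generalized Rayleigh quotient $v^\top A v / v^\top B v$ are generalized eigenvectors of the pair $(A, B)$, which is what makes such a formulation annotation- and training-free; the sketch below uses random stand-ins for the region and complement matrices rather than quantities derived from an actual GAN.

```python
# Solving a generalized Rayleigh quotient via a generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
d = 32
Ja = rng.standard_normal((64, d)); A = Ja.T @ Ja                      # region of interest
Jb = rng.standard_normal((64, d)); B = Jb.T @ Jb + 1e-3 * np.eye(d)   # rest of the image

# eigh(A, B) solves A v = lambda B v with eigenvalues in ascending order,
# so the last eigenvectors change the region of interest most while
# perturbing the rest as little as possible.
eigvals, eigvecs = eigh(A, B)
top_directions = eigvecs[:, -3:]   # candidate region-specific semantic directions
```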
The latent space of a Generative Adversarial Network (GAN) has been shown to encode rich semantics within some subspaces. To identify these subspaces, researchers typically analyze the statistics of a collection of synthesized data, and the identified subspaces tend to control image attributes globally (i.e., manipulating an attribute causes changes to the entire image). By contrast, this work introduces low-rank subspaces that enable more precise control over GAN generation. Concretely, given an arbitrary image and a region of interest (e.g., the eyes in face images), we manage to relate the latent space to the image region with the Jacobian matrix and then use low-rank factorization to discover steerable latent subspaces. Our approach, aptly termed LowRankGAN, has three distinguishable strengths. First, compared to the analytic algorithms in prior work, our low-rank factorization of Jacobians is able to find a low-dimensional representation of the attribute manifold, making image editing more precise and controllable. Second, the low-rank factorization naturally yields a null space of attributes, such that moving the latent code within it only affects the outer region of interest. Local image editing can therefore be achieved simply by projecting an attribute vector into the null space, without relying on a spatial mask as existing methods do. Third, our method can robustly work with a local region from one image for analysis yet generalize to other images, making it easy to use in practice. Extensive experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.
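A toy sketch of the two linear-algebra ingredients mentioned above, under one plausible reading: a low-rank factorization of the region Jacobian to obtain an attribute direction, and a projection onto the null space of the complement region's Jacobian so the edit leaves that region unchanged to first order. The Jacobians here are random stand-ins, not ones computed from a pretrained generator.

```python
# Toy low-rank factorization and null-space projection (random stand-in Jacobians).
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 64
J_roi = rng.standard_normal((200, latent_dim))   # Jacobian w.r.t. the region of interest
J_rest = rng.standard_normal((30, latent_dim))   # Jacobian w.r.t. the rest of the image

# 1) Low-rank factorization: principal right-singular vectors steer the region.
_, _, Vt = np.linalg.svd(J_roi, full_matrices=False)
attribute_vec = Vt[0]                            # strongest direction for the region

# 2) Null space of the complement Jacobian: moving along it leaves the rest
#    unchanged to first order.
_, s, Vt_rest = np.linalg.svd(J_rest, full_matrices=True)
null_basis = Vt_rest[len(s):]                    # rows spanning the null space

# Project the attribute vector into the null space -> local edit direction.
local_dir = null_basis.T @ (null_basis @ attribute_vec)
local_dir /= np.linalg.norm(local_dir)
```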
Recent work has shown that a variety of semantics emerge in the latent space of Generative Adversarial Networks (GANs) when being trained to synthesize images. However, it is difficult to use these learned semantics for real image editing. A common practice of feeding a real image to a trained GAN generator is to invert it back to a latent code. However, existing inversion methods typically focus on reconstructing the target image by pixel values yet fail to land the inverted code in the semantic domain of the original latent space. As a result, the reconstructed image cannot well support semantic editing through varying the inverted code. To solve this problem, we propose an in-domain GAN inversion approach, which not only faithfully reconstructs the input image but also ensures that the inverted code is semantically meaningful for editing. We first learn a novel domain-guided encoder to project a given image to the native latent space of GANs. We then propose domain-regularized optimization, which involves the encoder as a regularizer to fine-tune the code produced by the encoder and better recover the target image. Extensive experiments suggest that our inversion method achieves satisfying real image reconstruction and, more importantly, facilitates various image editing tasks, significantly outperforming the state of the art.
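A minimal sketch of what domain-regularized optimization could look like, with placeholder linear modules standing in for the pretrained generator G and domain-guided encoder E, and a plain MSE in place of the full reconstruction losses; the regularizer weight and step count are made up.

```python
# Toy domain-regularized optimization loop (placeholder G, E, and losses).
import torch
import torch.nn.functional as F

latent_dim, img_dim = 128, 3 * 64 * 64
G = torch.nn.Linear(latent_dim, img_dim)   # placeholder generator
E = torch.nn.Linear(img_dim, latent_dim)   # placeholder domain-guided encoder
target = torch.randn(1, img_dim)           # placeholder target image

# Start from the encoder's projection, then fine-tune the latent code.
z = E(target).detach().clone().requires_grad_(True)
opt = torch.optim.Adam([z], lr=0.01)
lam = 2.0                                  # assumed weight of the domain regularizer

for _ in range(200):
    recon = G(z)
    loss_pix = F.mse_loss(recon, target)   # reconstruct the target image
    loss_dom = F.mse_loss(z, E(recon))     # keep z in the encoder's (in-domain) range
    loss = loss_pix + lam * loss_dom
    opt.zero_grad()
    loss.backward()
    opt.step()
```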
Graph Neural Networks (GNNs) have achieved promising performance on a wide range of graph-based tasks. Despite their success, one severe limitation of GNNs is the over-smoothing issue (indistinguishable representations of nodes in different classes). In this work, we present a systematic and quantitative study on the over-smoothing issue of GNNs. First, we introduce two quantitative metrics, MAD and MADGap, to measure the smoothness and over-smoothness of graph node representations, respectively. Then, we verify that smoothing is the nature of GNNs and that the critical factor leading to over-smoothness is the low information-to-noise ratio of the messages received by the nodes, which is partially determined by the graph topology. Finally, we propose two methods to alleviate the over-smoothing issue from the topological view: (1) MADReg, which adds a MADGap-based regularizer to the training objective; (2) AdaEdge, which optimizes the graph topology based on the model predictions. Extensive experiments on 7 widely used graph datasets with 10 typical GNN models show that the two proposed methods are effective in relieving the over-smoothing issue, thus improving the performance of various GNN models.
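For reference, a small sketch of the MAD metric (mean average cosine distance between node representations); MADGap would contrast MAD computed over remote node pairs with MAD over neighboring pairs, which is omitted here.

```python
# MAD: mean of per-node average cosine distances over a pair mask.
import numpy as np

def mad(h, mask=None):
    """h: (N, D) node representations; mask: (N, N) 0/1 matrix of pairs to include."""
    h_norm = h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-12)
    dist = 1.0 - h_norm @ h_norm.T           # pairwise cosine distance
    if mask is None:
        mask = 1.0 - np.eye(len(h))          # all pairs except self-pairs
    per_node = (dist * mask).sum(1) / np.maximum(mask.sum(1), 1)
    return per_node[mask.sum(1) > 0].mean()  # average over nodes with valid pairs

h = np.random.rand(100, 16)
print("MAD:", mad(h))   # values near 0 indicate over-smoothed representations
```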
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task that frees people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform image-level and feature-level adaptation jointly. Unfortunately, such unified approaches for UDA tasks are lacking in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; we further regularize category centers in the source domain through a category-oriented triplet loss and perform target-domain consistency regularization over augmented target-domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. On the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using DeepLab V3+ as the backbone surpasses the previous SOTA by 8%, achieving 58.2% mIoU.
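As a simple illustration of image-level photometric alignment, the sketch below matches the per-channel mean and standard deviation of a source image to assumed target-domain statistics; the paper's module is learned and more elaborate, so this only conveys the idea of aligning low-level image statistics.

```python
# Toy per-channel mean/std alignment of a source image to target-domain statistics.
import numpy as np

def photometric_align(src, tgt_mean, tgt_std, eps=1e-6):
    """src: (H, W, 3) float image; tgt_mean/tgt_std: per-channel target statistics."""
    src_mean = src.mean(axis=(0, 1), keepdims=True)
    src_std = src.std(axis=(0, 1), keepdims=True)
    return (src - src_mean) / (src_std + eps) * tgt_std + tgt_mean

src = np.random.rand(256, 512, 3).astype(np.float32)        # stand-in for a GTA5 frame
tgt_mean = np.array([0.29, 0.33, 0.29], dtype=np.float32)   # made-up target-domain stats
tgt_std = np.array([0.18, 0.18, 0.18], dtype=np.float32)
aligned = photometric_align(src, tgt_mean, tgt_std)
```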
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
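The sketch below shows one way a style code could modulate a feed-forward layer's weights, in the spirit of the described style-aware adaptation; the architecture and dimensions are invented for illustration and do not reproduce the released StyleTalk model.

```python
# Toy style-modulated feed-forward layer (invented dimensions and modulation rule).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleAdaptiveFFN(nn.Module):
    def __init__(self, dim=256, hidden=1024, style_dim=128):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        # Maps the style code to multiplicative scales over fc1's input channels.
        self.to_scale = nn.Linear(style_dim, dim)

    def forward(self, x, style):
        scale = 1.0 + self.to_scale(style)                      # (B, dim)
        w = self.fc1.weight.unsqueeze(0) * scale.unsqueeze(1)   # (B, hidden, dim)
        h = torch.einsum("bnd,bhd->bnh", x, w) + self.fc1.bias  # style-weighted fc1
        return self.fc2(F.relu(h))

ffn = StyleAdaptiveFFN()
tokens = torch.randn(2, 50, 256)     # stand-in for content/audio tokens
style = torch.randn(2, 128)          # stand-in for the reference style code
out = ffn(tokens, style)             # (2, 50, 256)
```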